
Top 10 robotic stories of 2022 - The Robot Report

#artificialintelligence

In 2022 we saw big movements in the robotics industry. From high-profile lawsuits to major acquisitions to exciting new robots and deployments, there was no shortage of news to cover this year. Here are the top 10 most popular stories on The Robot Report in 2022. Subscribe to The Robot Report Newsletter or listen to The Robot Report Podcast to stay updated on the robotics stories you need to know about. Early this year, Amazon unveiled its first-ever autonomous mobile robot (AMR), Proteus. The company first entered the mobile robot space in 2012, when it acquired Kiva Systems for $775 million.


Improving the Performance of Online Neural Transducer Models

Sainath, Tara N., Chiu, Chung-Cheng, Prabhavalkar, Rohit, Kannan, Anjuli, Wu, Yonghui, Nguyen, Patrick, Chen, Zhifeng

arXiv.org Machine Learning

ABSTRACT Having a sequence-to-sequence model which can operate in an online fashion is important for streaming applications such as Voice Search. Neural transducer (NT) is a streaming sequence-to-sequence model, but it has shown a significant degradation in performance compared to non-streaming models such as Listen, Attend and Spell (LAS). In this work, we look at increasing the window over which NT computes attention, mainly by looking backwards in time so that the model remains online. In addition, we explore initializing an NT model from a LAS-trained model so that it is guided with a better alignment. Finally, we explore incorporating stronger language models, such as wordpiece models, and applying an external LM during beam search. On a Voice Search task, we find that with these improvements we can get NT to match the performance of LAS. 1. INTRODUCTION Sequence-to-sequence models have become popular in the automatic speech recognition (ASR) community [1, 2, 3, 4], as they allow one neural network to jointly learn an acoustic, pronunciation and language model, greatly simplifying the ASR pipeline.


Announcing @TS_Embedded to Exhibit at @CloudExpo #IoT #IIoT #Embedded

#artificialintelligence

SYS-CON Events announced today that Technologic Systems Inc., an embedded systems solutions company, will exhibit at SYS-CON's @ThingsExpo, which will take place on June 6-8, 2017, at the Javits Center in New York City, NY. Technologic Systems is an embedded systems company with headquarters in Fountain Hills, Arizona. They have been in business for 32 years, helping more than 8,000 OEM customers and building over a hundred COTS products that have never been discontinued. Technologic Systems' product base consists of a wide variety of off-the-shelf PC/104 single board computers, computer-on-modules, touch panel computers, peripherals and industrial controllers. They also offer custom configurations and design services.


Online Tensor Methods for Learning Latent Variable Models

Huang, Furong, Niranjan, U. N., Hakeem, Mohammad Umar, Anandkumar, Animashree

arXiv.org Machine Learning

We introduce an online tensor decomposition based approach for two latent variable modeling problems: (1) community detection, in which we learn the latent communities that the social actors in social networks belong to, and (2) topic modeling, in which we infer the hidden topics of text articles. We consider decomposition of moment tensors using stochastic gradient descent (SGD). We optimize the multilinear operations within SGD and avoid explicitly forming the tensors, saving computational and storage costs. We present optimized algorithms on two platforms. Our GPU-based implementation exploits the parallelism of SIMD architectures to allow for maximum speed-up through careful optimization of storage and data transfer, whereas our CPU-based implementation uses efficient sparse matrix computations and is suitable for large sparse datasets. For the community detection problem, we demonstrate accuracy and computational efficiency on the Facebook, Yelp and DBLP datasets, and for the topic modeling problem, we demonstrate good performance on the New York Times dataset. We compare our results to state-of-the-art algorithms such as the variational method, and report a gain in accuracy and a speed-up of several orders of magnitude in execution time.
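The key trick the abstract describes, running SGD on a moment-tensor decomposition while expressing the gradient purely through multilinear (inner-product) operations so the d×d×d tensor is never materialized, can be sketched as follows. This is a minimal illustrative reading of that idea, assuming a squared-error objective between the empirical third-order moment and a sum of rank-one components; the learning rate and initialization are illustrative, not the paper's.

```python
import numpy as np

def online_tensor_sgd(samples, k, lr=0.001, seed=0):
    """Stochastic gradient sketch for decomposing the moment tensor
    T = E[x (x) x (x) x] into k rank-one components v_i^{(x)3}.

    For objective 0.5 * || sum_i v_i^{(x)3} - x^{(x)3} ||_F^2, the gradient
    w.r.t. v_i reduces to inner products only:
        grad_i = 3 * sum_j <v_i, v_j>^2 v_j - 3 * <v_i, x>^2 x
    so each update touches O(k*d) memory and the d^3 tensor is never built.
    """
    rng = np.random.default_rng(seed)
    d = samples.shape[1]
    V = rng.standard_normal((k, d))
    V /= np.linalg.norm(V, axis=1, keepdims=True)  # unit-norm init
    for x in samples:
        G = V @ V.T            # pairwise inner products <v_i, v_j>, shape (k, k)
        proj = V @ x           # projections <v_i, x>, shape (k,)
        grad = 3 * (G ** 2) @ V - 3 * np.outer(proj ** 2, x)
        V -= lr * grad
    return V
```

In practice (and in the paper's setting) the data would first be whitened using second-order moments so that the components are orthogonal; the sketch omits that preprocessing step for brevity.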